
    Proving Nontermination via Safety

    We show how the problem of nontermination proving can be reduced to a question of underapproximation search guided by a safety prover. This reduction leads to new nontermination proving implementation strategies based on existing tools for safety proving. Our preliminary implementation beats existing tools. Furthermore, our approach leads to easy support for programs with unbounded nondeterminism.
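
    The core of the reduction is the search for a recurrent set: a set of states, reachable by the loop, from which every further step of the loop stays inside the set, so execution can never escape. The sketch below shows only the closedness check, for a hypothetical loop and a guessed candidate set (nothing here is taken from the paper's tool or benchmarks), assuming the z3 Python bindings; in the paper this obligation is discharged by a safety prover rather than a direct SMT query.

        # Candidate recurrent-set check for the loop `while (x > 0) { x = x + y; }`.
        # Hypothetical example; assumes the z3 Python bindings (pip install z3-solver).
        from z3 import Ints, Solver, And, Not, Implies, unsat

        x, y, x1 = Ints("x y x1")

        G      = And(x > 0, y >= 0)   # guessed candidate recurrent set
        guard  = x > 0                # loop guard
        step   = x1 == x + y          # one loop iteration (y is unchanged)
        G_post = And(x1 > 0, y >= 0)  # the candidate set over the post-state

        s = Solver()
        # G and guard and step imply G' exactly when the negation is unsatisfiable;
        # this is the closedness obligation a safety prover would discharge.
        s.add(Not(Implies(And(G, guard, step), G_post)))
        print("closed (candidate is recurrent)" if s.check() == unsat else "not closed")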

    Coeffects: Unified static analysis of context-dependence

    Monadic effect systems provide a unified way of tracking effects of computations, but there is no unified mechanism for tracking how computations rely on the environment in which they are executed. This is becoming an important problem for modern software – we need to track where distributed computations run, which resources a program uses and how it uses other capabilities of the environment. We consider three examples of context-dependence analysis: liveness analysis, tracking the use of implicit parameters (similar to tracking of resource usage in distributed computation), and calculating caching requirements for dataflow programs. Informed by these cases, we present a unified calculus for tracking context dependence in functional languages together with a categorical semantics based on indexed comonads. We believe that indexed comonads are the right foundation for constructing context-aware languages and type systems and that following an approach akin to monads can lead to a widespread use of the concept.
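
    Of the three analyses, liveness is the easiest to picture: the context requirement of a program fragment is the set of variables it demands from its environment. The following is a minimal, self-contained illustration of that idea over a hypothetical straight-line program; it is ordinary backward liveness, not the paper's coeffect calculus or its comonadic semantics.

        # Backward liveness over a hypothetical straight-line program.
        # Each statement is a (defined_var, used_vars) pair.
        program = [
            ("a", {"x"}),        # a := x
            ("b", {"a", "y"}),   # b := a + y
            ("r", {"b"}),        # r := b
        ]

        live = {"r"}  # variables required by the surrounding context afterwards
        for defined, used in reversed(program):
            live = (live - {defined}) | used
            print(f"live before '{defined} := ...': {sorted(live)}")

        # The final set is the demand the whole fragment places on its environment:
        # the kind of requirement a coeffect annotation records.
        print("context demanded by the fragment:", sorted(live))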

    Automating Deductive Verification for Weak-Memory Programs

    Writing correct programs for weak memory models such as the C11 memory model is challenging because of the weak consistency guarantees these models provide. The first program logics for the verification of such programs have recently been proposed, but their usage has been limited thus far to manual proofs. Automating proofs in these logics via first-order solvers is non-trivial, due to reasoning features such as higher-order assertions, modalities and rich permission resources. In this paper, we provide the first implementation of a weak memory program logic using existing deductive verification tools. We tackle three recent program logics: Relaxed Separation Logic and two forms of Fenced Separation Logic, and show how these can be encoded using the Viper verification infrastructure. In doing so, we illustrate several novel encoding techniques which could be employed for other logics. Our work is implemented, and has been evaluated on examples from existing papers as well as the Facebook open-source Folly library. Comment: Extended version of the TACAS 2018 publication.

    Bottom-Up Shape Analysis

    In this paper we present a new shape analysis algorithm. The key distinguishing aspect of our algorithm is that it is completely compositional, bottom-up and non-iterative. We present our algorithm as an inference system for computing Hoare triples summarizing heap manipulating programs. Our inference rules are compositional: Hoare triples for a compound statement are computed from the Hoare triples of its component statements. These inference rules are used as the basis for a bottom-up shape analysis of programs. Specifically, we present a logic of iterated separation formulas (LISF) which uses the iterated separating conjunct of Reynolds [17] to represent program states. A key ingredient of our inference rules is a strong bi-abduction operation between two logical formulas. We describe sound strong bi-abduction and satisfiability decision procedures for LISF. We have built a prototype tool that implements these inference rules and have evaluated it on standard shape analysis benchmark programs. Preliminary results show that our tool can generate expressive summaries, which are complete functional specifications in many cases.
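
    For orientation, the classical bi-abduction judgement (which the paper's strong variant strengthens so that summaries compose bottom-up) asks, given symbolic heaps A and B, for an anti-frame M and a frame F making the entailment below hold. The instance shown uses hypothetical formulas, not ones taken from the paper.

        \[
          A \ast M \;\vdash\; B \ast F
          \qquad\text{for example}\qquad
          \underbrace{x \mapsto y}_{A} \ast \underbrace{y \mapsto z}_{M}
          \;\vdash\;
          \underbrace{y \mapsto z}_{B} \ast \underbrace{x \mapsto y}_{F}
        \]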

    Structural Refinement for the Modal nu-Calculus

    We introduce a new notion of structural refinement, a sound abstraction of logical implication, for the modal nu-calculus. Using new translations between the modal nu-calculus and disjunctive modal transition systems, we show that these two specification formalisms are structurally equivalent. Using our translations, we also transfer the structural operations of composition and quotient from disjunctive modal transition systems to the modal nu-calculus. This shows that the modal nu-calculus supports composition and decomposition of specifications. Comment: Accepted at ICTAC 2015.
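
    For background, structural refinement plays the role that modal refinement plays for modal transition systems. The standard definition of that relation, stated here for ordinary (non-disjunctive) MTS and not specific to the paper, is:

        \[
        \begin{aligned}
        s \le t \iff \exists R \ni (s,t) \text{ such that, for all } (s,t) \in R:&\\
        \forall a, s'.\; s \xrightarrow{a}_{\Diamond} s' &\implies \exists t'.\; t \xrightarrow{a}_{\Diamond} t' \wedge (s',t') \in R,\\
        \forall a, t'.\; t \xrightarrow{a}_{\Box} t' &\implies \exists s'.\; s \xrightarrow{a}_{\Box} s' \wedge (s',t') \in R.
        \end{aligned}
        \]

    Disjunctive MTS generalize the must-clause to disjunctions of required transitions.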

    Towards Scientific Incident Response

    A scientific incident analysis is one with a methodical, justifiable approach to the human decision-making process. Incident analysis is a good target for additional rigor because it is the most human-intensive part of incident response. Our goal is to provide the tools necessary for specifying precisely the reasoning process in incident analysis. Such tools are lacking, and are a necessary (though not sufficient) component of a more scientific analysis process. To reach this goal, we adapt tools from program verification that can capture and test abductive reasoning. As Charles Peirce coined the term in 1900, “Abduction is the process of forming an explanatory hypothesis. It is the only logical operation which introduces any new idea.” We reference canonical examples as paradigms of decision-making during analysis. With these examples in mind, we design a logic capable of expressing decision-making during incident analysis. The result is that we can express, in machine-readable and precise language, the abductive hypotheses that an analyst makes, and the results of evaluating them. This result is beneficial because it opens up the opportunity of genuinely comparing analyst processes without revealing sensitive system details, as well as opening an opportunity towards improved decision-support via limited automation.
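
    To make the intent concrete, here is a deliberately small, hypothetical rendering of the abductive step (the hypotheses, observables and retention rule are invented for illustration; the paper's logic is considerably richer): candidate explanations are kept while they explain something observed and none of their predictions has been refuted.

        # Toy sketch of abductive hypothesis evaluation during incident analysis.
        # All names are hypothetical; this is not the paper's logic.
        observations = {"outbound_traffic_spike", "new_scheduled_task"}

        hypotheses = {
            "data_exfiltration": {"outbound_traffic_spike", "new_scheduled_task"},
            "backup_job":        {"outbound_traffic_spike", "backup_agent_running"},
            "benign_update":     {"vendor_signed_binary"},
        }

        refuted = {"backup_agent_running"}   # evidence checked for and found absent

        def viable(predicted):
            # A hypothesis survives while none of its predictions is refuted.
            return not (predicted & refuted)

        candidates = {h for h, predicted in hypotheses.items()
                      if observations & predicted and viable(predicted)}
        print("hypotheses worth pursuing:", sorted(candidates))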

    Local reasoning about the presence of bugs: Incorrectness Separation Logic

    There has been a large body of work on local reasoning for proving the absence of bugs, but none for proving their presence. We present a new formal framework for local reasoning about the presence of bugs, building on two complementary foundations: 1) separation logic and 2) incorrectness logic. We explore the theory of this new incorrectness separation logic (ISL), and use it to derive a begin-anywhere, intra-procedural symbolic execution analysis that has no false positives by construction. In so doing, we take a step towards transferring modular, scalable techniques from the world of program verification to bug catching.
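
    The judgement that makes "no false positives by construction" possible is the under-approximate triple inherited from incorrectness logic, which reads in the opposite direction to a Hoare triple: every state described by the result assertion must actually be reachable. With the exit condition \(\epsilon\) ranging over ok (normal termination) and er (error), the triple can be stated as:

        \[
          [\,p\,]\; C\; [\,\epsilon : q\,]
          \quad\text{iff}\quad
          \forall \sigma'.\; \sigma' \models q \implies
          \exists \sigma.\; \sigma \models p \wedge (\sigma,\sigma') \in \llbracket C \rrbracket_{\epsilon}
        \]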